Introduction: In my corpus I’ve chosen to compare two playlists that are both matched to one specific song, Station Atlantis, which I released on Spotify last November. Many artists use the promotional tool of making a matching playlist for a song they plan to release, and then adding the song to the playlist, which by then has ideally gained some followers, on the release date. Spotify also makes playlists matched to a specific song, called Song Radios. This leaves us with two playlists: Café Atlantis, made by the person who wrote the song in question, and Radio Station Atlantis, generated by the Spotify algorithm. This sets up a number of fun questions, such as: does the A.I. know the artist better than they know themselves? Does human intuition in playlist making differ from the choices the algorithm makes? These questions can ideally be explored with much more material; since making a playlist for your own song is a widespread promotional tool, there are many of these playlists to be found across genres.
One problem with this choice of corpus is that Spotify’s Song Radios are customized to your personal taste, based on the listening history of your Spotify account. This is a problem for my data, because it would make the material in Radio Station Atlantis artificially closer to the material in Café Atlantis, as both would then be adapted to my taste. My solution was to load the Radio from a fresh Spotify account without a listening history. However, I would like to find a better solution eventually, or slightly alter the setup of my analysis.
In the boxplot on your left you can see Café Atlantis (the playlist by the artist) and Radio Station Atlantis (the playlist by the Spotify algorithm) compared on the parameters of valence (the ‘positiveness’ conveyed by the track, with 1.00 the positive and 0.00 the negative extreme of the spectrum) and track popularity (how popular the individual tracks are overall on Spotify, not how often they are played within this particular playlist). Colour is mapped onto energy: very high-energy songs are yellow and very low-energy songs are dark green (as shown in the legend). You can see that most songs sit somewhere in the middle of this range, with more extremes on the low end than on the high end of the spectrum. The red arrow points to Station Atlantis, the song that was the guideline for both playlists.
I found it interesting that, although the valence distributions of the two playlists are quite similar, the range of track popularity is much wider in Café Atlantis than in Radio Station Atlantis. This might be because the algorithm registered that Station Atlantis comes from a starting-out, unknown artist and therefore filled the Radio with other starting-out artists (perhaps particularly from the Netherlands), whereas I as the artist included both my heroes (often very famous) and my friends (not very famous) in my playlist, causing a much wider range.
On the left you can see a chromagram of the song Station Atlantis. Since this song was the starting point for both playlists under investigation, I thought it would be interesting to have a closer look at it. Unfortunately I don’t find the chromagram very informative. I aim to take a second look at this later and try to pick more suitable songs or make better chromagrams. One reason this chromagram may be a little uninformative is that the guitar part consists mainly of quickly alternating arpeggiated chords. With a melody that also makes big jumps on top of those arpeggios, it is hard to distinguish what is going on harmonically, especially because the chromagram does not visualize actual pitch, only pitch class.
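To make the pitch-class point concrete: a chromagram folds every detected frequency into one of 12 pitch classes, discarding the octave. The figure itself presumably comes from Spotify’s audio-analysis pitch vectors, but a minimal from-scratch sketch (my own illustration, assuming raw audio samples as input) shows the idea:

```python
import numpy as np

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chromagram(y, sr, frame_len=4096, hop=1024):
    """STFT-based chromagram: rows are the 12 pitch classes (C = 0),
    columns are time frames. Octave information is discarded, which is
    exactly why arpeggios with big melodic jumps look smeared."""
    n_frames = 1 + (len(y) - frame_len) // hop
    chroma = np.zeros((12, n_frames))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    window = np.hanning(frame_len)
    for t in range(n_frames):
        frame = y[t * hop : t * hop + frame_len]
        spec = np.abs(np.fft.rfft(window * frame))
        for k in range(1, len(freqs)):           # skip the DC bin
            if 55.0 <= freqs[k] <= 4000.0:        # musically relevant range
                midi = 69 + 12 * np.log2(freqs[k] / 440.0)
                chroma[int(round(midi)) % 12, t] += spec[k]
    return chroma / (chroma.max() + 1e-9)
```

Because an A2 and an A5 land in the same row, a guitar arpeggio spanning octaves lights up several rows at once rather than tracing a melody.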
This is a chromagram for the song May You Never by John Martyn. May You Never is a song in the playlist Café Atlantis, and it was an outlier on valence, as you can see in the graph two tabs earlier. From listening to the song I know this is probably because the lyrics are quite positive; the general message is something along the lines of “I wish you the best; I hope bad things such as x, y and z may never happen to you.” However, perhaps there are also harmonic features that make the song an outlier in the valence department. Let’s look at the chromagram on the left. Unfortunately I wasn’t yet able to do much with the chromagrams. I can tell that Station Atlantis and May You Never are probably in different keys and (obviously) use different notes. One thing that does seem clear is that in this chromagram, compared to the previous one, you can see a clearer set of key notes that the song revolves around.
In this self-similarity matrix the chroma values of the song Station Atlantis, which serves as the starting point for both playlists under analysis, are compared over time. I found it interesting that analysing chroma features really resulted in a graph that reveals the structure of the song. On the outer left you can see the intro in dark blue. The guitar part that forms the intro is repeated at the end of the song, which you can see in the bottom-right (or top-right) corner. The rest of the song consists of a short guitar theme alternated with verses. The guitar theme is visible as small mid-blue rectangles along the axes, and the verses are in bright yellow. The conclusion to be drawn is that Station Atlantis is a song with clearly harmonically demarcated parts that make up a song structure.
It is interesting, then, that a self-similarity matrix based on timbre shows a similar structure, but one that is hardly visible because the matrix has very little contrast. Only in the bottom-left and top-right corners can you see the intro, and the repeating guitar part between the verses can be detected as a checker pattern across the matrix. According to the graph, the timbre stays roughly the same throughout the song. This is mostly true, as voice and guitar are the only instruments on the recording. However, I did expect the verses to pop out in the matrix, as they did in the harmonic analysis, since those are the only parts of the song with vocals.
A self-proclaimed outlier in the playlist Café Atlantis that makes an interesting case study is the song Beneath the Neural Waves by Papi Thereso. Papi Thereso makes synthesizer-based soundscapes. When analysing BTNW harmonically, the self-similarity matrix reveals that there seems to be no harmonic structure in the piece. This makes total sense, because the piece revolves mainly around sound and ambience, and pitches alternate all across the pitch spectrum. I changed the colour scale for fun, and also to make it easy to spot which song you are dealing with.
When we look at the same song analysed on timbre features, though, we can see that the sound changes at around two thirds into the song, into a kind of ‘sound coda’. So we could say that this piece has a compositional structure when it comes to sound design, but not when it comes to harmony, which mirrors my analysis of Station Atlantis, where the matrices revealed a harmonic compositional structure but not a sonic one. I therefore proved myself right to have claimed Beneath the Neural Waves to be an outlier :).
Now let’s take a look at the cepstrogram of Beneath the Neural Waves, since this song has an interesting sonic development. Keeping in mind the self-similarity matrix we just saw, we can see in this cepstrogram that indeed, most of the core timbre features (c01, c02, c03, c05 and a few higher up) change significantly around two thirds into the song. This again shows the sonic contrast between the first 140 seconds of the song and everything after it.
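The “change at around two thirds” observation can also be checked mechanically. A hypothetical sketch (not what generated the cepstrogram): scan every candidate split point in a timbre-feature matrix and pick the one where the average timbre before and after differs most.

```python
import numpy as np

def timbre_boundary(features):
    """features: (d, n) array of timbre features, e.g. cepstral
    coefficients, one column per frame. For each candidate split point,
    compare the mean feature vector before and after it, and return
    the split with the biggest jump in average timbre."""
    d, n = features.shape
    best, best_gap = None, -1.0
    for t in range(n // 10, n - n // 10):   # avoid the very edges
        gap = np.linalg.norm(features[:, :t].mean(axis=1) -
                             features[:, t:].mean(axis=1))
        if gap > best_gap:
            best, best_gap = t, gap
    return best
```

On a cepstrogram like this one, such a detector would be expected to land near the 140-second mark, where c01–c05 all shift at once.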
On the left you see a chordogram of the song Blue Water, which is in the playlist Café Atlantis. I chose this song for analysis because it has two repetitive harmonic sequences, one for the first half of the song and another until the end, and I was wondering whether, and in what way, this would show up in the chordogram. As you can see at a general glance, there is indeed a change at around the middle of the song.
When we take a closer look, we can see that the beginning of the song seems to be in G, alternating with C: the bands for G7, G major and G minor are the darkest blue in this part, but switch with parts where C minor, C major and C7 are darkest. By ear, I would say this part is in the tonality of G minor, alternating with C minor. There is a lot going on in the song: bells in the high register, synths, and several (sometimes off-key) vocals, and most of it is ‘organic’, by which I mean the instruments are not computers and are mostly analogue. This probably explains why it is harder to compute whether the chords are major, minor or dominant. A third or seventh that is sometimes off-key, or intonated differently on different instruments, then accounts for the fact that a major as well as a minor as well as a dominant chord is detected.
I found it cool to see that the part of the song where a kind of transition happens between the first harmonic structure and the second is rather blurry in the graph, with what seems to be a diagonally upward movement. This puzzled me, because it makes it look as if the song goes up the circle of fifths, which I don’t hear in the recording. What I can connect between the recording and the visual, however, is that at first the tonality of G minor still lingers, and it slowly dissolves into higher-pitched sounds.
In the second part of the song the tonality stays roughly the same, but now the chords (by ear) move between G minor, E flat major, C minor and B major. I do see these chords in the chordogram, in the same way I saw them in the first part, with the root-note chords highlighted. G minor and E flat major share many of the same notes. I also see A flat major highlighted, which is interesting.
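The chordogram presumably works by matching each chroma frame against chord templates, and that mechanism explains the major/minor/dominant ambiguity described above. A minimal template matcher (my own sketch, not the implementation behind the figure) makes this visible: a frame whose third is off-key scores several templates nearly equally.

```python
import numpy as np

# Binary chord templates over the 12 pitch classes (C = 0).
SHAPES = {"maj": [0, 4, 7], "min": [0, 3, 7], "7": [0, 4, 7, 10]}
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def best_chord(chroma_frame):
    """Match one 12-dim chroma vector against all 36 major/minor/
    dominant templates; return the best-scoring chord label."""
    best, best_score = None, -1.0
    v = chroma_frame / (np.linalg.norm(chroma_frame) + 1e-9)
    for root in range(12):
        for quality, intervals in SHAPES.items():
            template = np.zeros(12)
            template[[(root + i) % 12 for i in intervals]] = 1.0
            template /= np.linalg.norm(template)
            score = float(v @ template)
            if score > best_score:
                best, best_score = f"{NAMES[root]}{quality}", score
    return best
```

A frame with energy on both B and B-flat over a G root, for example, scores the G major and G minor templates almost identically, which is exactly the blur the chordogram shows.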
On the left you can see a tempogram of the song Deep Within by the vocal group Greener Grass. You can see that there is a consistent tempo throughout the song, which is rhythmically entirely repetitive; this probably helped the algorithm find the tempo of the song. The tempogram captures very well what is going on, even the slowing down at the end of the song. I chose to use a tempo octave restriction, since in the unrestricted tempogram the strongest yellow line appeared at around 200 BPM, which is much too fast for this song.
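The octave restriction deals with tempo-octave ambiguity: a ~200 BPM line and a ~100 BPM line describe the same pulse counted at different metrical levels. A small sketch of the folding idea (the 80–160 BPM bounds here are my assumption, not necessarily the ones used for the figure):

```python
def fold_tempo(bpm, lo=80.0, hi=160.0):
    """Fold a tempo estimate into a single tempo octave [lo, hi):
    halving or doubling a BPM value keeps the same underlying pulse,
    so a spurious 200 BPM line folds down to a plausible 100 BPM."""
    while bpm >= hi:
        bpm /= 2.0
    while bpm < lo:
        bpm *= 2.0
    return bpm
```

With these bounds, the 200 BPM line from the unrestricted tempogram folds down to 100 BPM, which is in the plausible range for this song.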